42 research outputs found

    On the Dynamic Relationship between Inequality and Economic Growth

    Get PDF

    Neural correlates of confusability in recognition of morphologically complex Korean words

    Get PDF
    The phenomenon in which people confuse and reject a non-word created by switching two adjacent letters of an actual word is called the transposition confusability effect (TCE). The TCE is known to occur at very early stages of visual word recognition with the exchange of units such as letters or syllables, but little is known about the brain mechanisms underlying it. In this study, we examined the neural correlates of the TCE and the effect of morpheme boundary placement on it. We manipulated the placement of a morpheme boundary by exchanging the positions of two syllables embedded in Korean morphologically complex words made up of a lexical morpheme and a grammatical morpheme. In the two experimental conditions, the transposition-syllable within-boundary condition (TSW) involved exchanging two syllables within the same morpheme, whereas the across-boundary condition (TSA) involved exchanging syllables across the boundary between the stem and the grammatical morpheme. During fMRI, participants performed a lexical decision task. Behavioral results revealed that the TCE occurred in the TSW condition and that the morpheme boundary, which was manipulated in TSA, modulated the TCE. In the fMRI results, the TCE induced activation in the left inferior parietal lobe (IPL) and intraparietal sulcus (IPS). The IPS activation was specific to the TCE, and its strength was associated with task performance. Furthermore, two functional networks were involved in the TCE: the central executive network and the dorsal attention network. Morpheme boundary modulation suppressed the TCE by recruiting prefrontal and temporal regions, key regions involved in semantic processing. Our findings suggest a role for the dorsal visual pathway in syllable position processing and indicate that its interaction with other higher cognitive systems is modulated by the morphological boundary in the early phases of visual word recognition.
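    As an illustration of the stimulus manipulation described above, the minimal sketch below generates within-boundary (TSW) and across-boundary (TSA) transpositions for a syllable sequence. The romanized syllables and the morpheme segmentation are invented for the example; the actual stimuli were Hangul words.

    ```python
    # Minimal sketch of generating transposed-syllable non-words (TSW vs. TSA).
    # The example word and its morpheme segmentation are hypothetical.

    def transpose(syllables, i):
        """Swap the adjacent syllables at positions i and i+1."""
        s = list(syllables)
        s[i], s[i + 1] = s[i + 1], s[i]
        return s

    # Hypothetical word: three-syllable stem + two-syllable grammatical morpheme.
    stem = ["ha", "nu", "ri"]          # lexical morpheme (stem)
    suffix = ["ey", "se"]              # grammatical morpheme
    word = stem + suffix

    # TSW: swap two syllables inside the stem (within one morpheme).
    tsw = transpose(word, 0)              # ["nu", "ha", "ri", "ey", "se"]

    # TSA: swap the last stem syllable with the first suffix syllable,
    # crossing the morpheme boundary.
    tsa = transpose(word, len(stem) - 1)  # ["ha", "nu", "ey", "ri", "se"]

    print("base:", word, "TSW:", tsw, "TSA:", tsa)
    ```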

    High performance Ge nanowire anode sheathed with carbon for lithium rechargeable batteries

    Get PDF
    We present a single-crystalline Ge nanowire anode material sheathed with carbon, prepared by a solid-liquid solution method. The composite electrode composed of Ge nanowires shows impressive electrochemical properties, exhibiting a very high reversible charge capacity (after lithium removal) of 963 mA h g−1 with a coulombic efficiency of 91%.
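    A quick sanity check of these figures, assuming the usual definition of first-cycle coulombic efficiency as the ratio of delithiation (charge) capacity to lithiation (discharge) capacity:

    ```python
    # Back-of-envelope check of the first-cycle numbers quoted above, assuming
    # coulombic efficiency = delithiation capacity / lithiation capacity.
    reversible_capacity = 963.0   # mAh/g, charge (delithiation) capacity
    coulombic_eff = 0.91

    lithiation_capacity = reversible_capacity / coulombic_eff
    irreversible_loss = lithiation_capacity - reversible_capacity

    print(f"implied first lithiation capacity: {lithiation_capacity:.0f} mAh/g")  # ~1058
    print(f"irreversible first-cycle loss:     {irreversible_loss:.0f} mAh/g")    # ~95
    ```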

    Sustainable and recyclable super engineering thermoplastic from biorenewable monomer

    Get PDF
    Environmental and health concerns force the search for sustainable super engineering plastics (SEPs) that utilise bio-derived cyclic monomers, e.g. isosorbide, instead of restricted petrochemicals. However, previously reported bio-derived thermosets or thermoplastics rarely offer thermal/mechanical properties, scalability, or recyclability that match those of petrochemical SEPs. Here we use a phase transfer catalyst to synthesise an isosorbide-based polymer with a high molecular weight of >100 kg mol−1, reproducible at 1-kg-scale production. It is transparent and solvent/melt-processible for recycling, with a glass transition temperature of 212 °C, a tensile strength of 78 MPa, and a thermal expansion coefficient of 23.8 ppm K−1. Such a performance combination has not been reported before for bio-based thermoplastics, petrochemical SEPs, or thermosets. Interestingly, quantum chemical simulations show that the bicyclic alicyclic ring structure of isosorbide imposes a stronger geometric restraint on the polymer chain than the aromatic group of bisphenol-A.

    REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs

    Full text link
    Glaucoma is one of the leading causes of irreversible but preventable blindness in working-age populations. Color fundus photography (CFP) is the most cost-effective imaging modality to screen for retinal disorders. However, its application to glaucoma has been limited to the computation of a few related biomarkers such as the vertical cup-to-disc ratio. Deep learning approaches, although widely applied for medical image analysis, have not been extensively used for glaucoma assessment due to the limited size of the available data sets. Furthermore, the lack of a standardized benchmarking strategy makes it difficult to compare existing methods in a uniform way. In order to overcome these issues, we set up the Retinal Fundus Glaucoma Challenge, REFUGE (https://refuge.grand-challenge.org), held in conjunction with MICCAI 2018. The challenge consisted of two primary tasks, namely optic disc/cup segmentation and glaucoma classification. As part of REFUGE, we have publicly released a data set of 1200 fundus images with ground truth segmentations and clinical glaucoma labels, currently the largest existing one. We have also built an evaluation framework to ease and ensure fairness in the comparison of different models, encouraging the development of novel techniques in the field. Twelve teams qualified and participated in the online challenge. This paper summarizes their methods and analyzes their corresponding results. In particular, we observed that two of the top-ranked teams outperformed two human experts in the glaucoma classification task. Furthermore, the segmentation results were in general consistent with the ground truth annotations, with complementary outcomes that can be further exploited by ensembling the results.

    This work was supported by the Christian Doppler Research Association, the Austrian Federal Ministry for Digital and Economic Affairs, and the National Foundation for Research, Technology and Development. J.I.O. is supported by WWTF (Medical University of Vienna: AugUniWien/FA7464A0249, University of Vienna: VRG12-009). Team Masker is supported by the Natural Science Foundation of Guangdong Province of China (Grant 2017A030310647). Team BUCT is partially supported by the National Natural Science Foundation of China (Grant 11571031). The authors would also like to thank the REFUGE study group for collaborating with this challenge.

    Orlando, J. I., Fu, H., Breda, J. B., van Keer, K., Bathula, D. R., Díaz-Pinto, A., Fang, R., et al. (2020). REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Medical Image Analysis, 59, 1-21. https://doi.org/10.1016/j.media.2019.101570
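    To make the two challenge tasks concrete, here is a minimal, hypothetical sketch (not the official REFUGE evaluation code) of two quantities involved: the Dice coefficient used to score a binary optic disc/cup segmentation against ground truth, and the vertical cup-to-disc ratio biomarker mentioned above.

    ```python
    # Illustrative sketch of two optic disc/cup quantities: Dice overlap for a
    # binary segmentation and the vertical cup-to-disc ratio (vCDR).
    import numpy as np

    def dice(pred: np.ndarray, gt: np.ndarray) -> float:
        """Dice coefficient between two binary masks."""
        inter = np.logical_and(pred, gt).sum()
        denom = pred.sum() + gt.sum()
        return 2.0 * inter / denom if denom else 1.0

    def vertical_extent(mask: np.ndarray) -> int:
        """Height in pixels of the mask's vertical span."""
        rows = np.where(mask.any(axis=1))[0]
        return int(rows[-1] - rows[0] + 1) if rows.size else 0

    def vcdr(cup: np.ndarray, disc: np.ndarray) -> float:
        """Vertical cup-to-disc ratio: cup height / disc height."""
        return vertical_extent(cup) / vertical_extent(disc)

    # Toy usage with concentric square "disc" and "cup" masks:
    disc = np.zeros((100, 100), bool); disc[20:80, 20:80] = True
    cup = np.zeros((100, 100), bool); cup[35:65, 35:65] = True
    print(f"vCDR: {vcdr(cup, disc):.2f}")            # 30 / 60 = 0.50
    print(f"Dice(cup, disc): {dice(cup, disc):.2f}")  # 0.40
    ```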

    A numerical study on turbocharging system for PFI-SI type hydrogen combustion engine

    No full text
    The hydrogen internal combustion engine (H2ICE) has received increasing attention in various industry sectors as it produces nearly zero carbon emissions. However, its power output has been reported to be lower than that of a gasoline engine, especially for port fuel injection (PFI) type hydrogen engines. This is mainly due to the low density of hydrogen, which reduces volumetric efficiency. A turbocharging system can improve the power output by pushing more air into the combustion chamber. However, it was observed that incorrect turbocharger matching hampers the power output increase, resulting in low specific power (< 30 kW/L). To achieve performance equivalent to a turbocharged PFI gasoline engine, the boosting system required for the PFI H2ICE has been numerically investigated using 1D engine simulation. A 1.6 L turbocharged PFI gasoline engine was used as the base engine. The validated base engine model was modified for hydrogen operation, and the simulation was carried out at wide open throttle (WOT) from 1000 to 4000 RPM at an equivalence ratio (φ) of 0.55. It was identified that the PFI H2ICE requires 50% higher mass flow and 90% higher boost pressure than the turbocharged gasoline engine. A single-stage charging system is not able to supply the required boost and mass flow over the wide range of operation. Instead, a two-stage boosting system with a variable geometry turbine (VGT) at the high-pressure stage could deliver such high boost and mass flow. The boost and mass flow demands are mainly influenced by the operational lambda (λ) and the target performance, which should be considered in designing the boosting system for the PFI SI type H2ICE.
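    The roughly 50% mass flow penalty can be reproduced with a back-of-envelope estimate. The sketch below uses textbook values (stoichiometric air-fuel ratios of about 34.3 for hydrogen and 14.7 for gasoline; lower heating values of about 120 and 44 MJ/kg) and ignores volumetric efficiency and combustion differences, so it is an assumption-laden illustration, not a substitute for the 1D simulation.

    ```python
    # Back-of-envelope: air mass needed per MJ of fuel energy for a lean PFI
    # H2ICE vs. a stoichiometric gasoline engine. Textbook values assumed.
    afr_h2, afr_gas = 34.3, 14.7     # stoichiometric air-fuel ratios (by mass)
    lhv_h2, lhv_gas = 120.0, 44.0    # lower heating values, MJ/kg
    phi = 0.55                       # equivalence ratio used in the study

    air_per_mj_h2 = (afr_h2 / phi) / lhv_h2   # kg air per MJ of fuel energy
    air_per_mj_gas = afr_gas / lhv_gas        # gasoline at phi = 1

    print(f"H2 at phi={phi}: {air_per_mj_h2:.3f} kg air/MJ")
    print(f"gasoline:        {air_per_mj_gas:.3f} kg air/MJ")
    print(f"ratio: {air_per_mj_h2 / air_per_mj_gas:.2f}")  # ~1.56, i.e. ~50-60% more air
    ```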

    Mapping the Neural Dynamics of Korean–English Bilinguals With Medium Proficiency During Auditory Word Processing

    Get PDF
    Bilingualism is a worldwide phenomenon and provides an opportunity to understand how the brain represents language processing. Although many studies have investigated the neural mechanisms of bilingualism, it still remains unclear how brain systems are involved in second language processing. Here, we examined the neural dynamics of bilinguals with medium proficiency during auditory word processing. Korean–English (K–E) bilinguals were recruited for the study (L1: Korean and L2: English). They performed a word comprehension task on phonological and semantic aspects by hearing words. We compared their task performance, task-induced regional activity, and functional connectivity (FC) between L1 and L2 processing. Brain activation analyses revealed that L2 evoked more widespread and stronger activation in brain regions involved in auditory word processing, and the increased regional activity in L2 was prominent during phonological processing. Moreover, L2-evoked up-regulation during semantic processing was associated with L2 proficiency. FC analyses demonstrated that intra-network connectivity was stronger in the language network (LN), dorsal attention network (DAN), and default mode network (DMN) in L2 than in L1. For L2 phonological processing, the increased FC within the DAN was positively correlated with individuals' L2 proficiency. Also, L2 semantic processing induced enhanced internetwork connectivity between the LN and DMN. Our findings suggest that L2 processing in K–E bilinguals induces dynamic changes in the brain at the regional and network levels, and that FC analysis can disentangle the involvement of different networks in L2 auditory word processing according to two key features: phonology and semantics.
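    As background for the FC analyses described above, the following is a generic sketch of how ROI-to-ROI functional connectivity and mean intra-network connectivity are typically computed from regional time series. Random data stands in for preprocessed BOLD signals, and the network assignment is hypothetical; this shows the standard approach, not the authors' exact pipeline.

    ```python
    # Generic ROI-to-ROI functional connectivity: Pearson correlation between
    # regional time series, then a mean intra-network connectivity summary.
    import numpy as np

    rng = np.random.default_rng(0)
    n_timepoints, n_rois = 200, 6
    ts = rng.standard_normal((n_timepoints, n_rois))   # time x ROI matrix

    fc = np.corrcoef(ts, rowvar=False)                 # ROI x ROI correlation matrix

    # Intra-network connectivity: mean correlation among one network's ROIs,
    # excluding the diagonal (ROIs 0-2 assigned to the network for illustration).
    net = [0, 1, 2]
    sub = fc[np.ix_(net, net)]
    intra = (sub.sum() - len(net)) / (len(net) * (len(net) - 1))
    print(f"mean intra-network FC: {intra:.3f}")
    ```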